Social and Physical Attributes-Defined Trust Evaluation for Effective Collaborator Selection in Human-Device Coexistence Systems

Zhu, Botao, Wang, Xianbin

arXiv.org Machine Learning

In human-device coexistence systems, collaborations among devices are determined not only by physical attributes such as network topology but also by social attributes among human users. Consequently, trust evaluation of potential collaborators based on these multifaceted attributes becomes critical for ensuring the eventual outcome of the collaboration. However, due to the high heterogeneity and complexity of physical and social attributes, efficiently integrating them for accurate trust evaluation remains challenging. To overcome this difficulty, a canonical correlation analysis-enhanced hypergraph self-supervised learning (HSLCCA) method is proposed in this research. First, by treating all attributes as relationships among connected devices, a relationship hypergraph is constructed to comprehensively capture inter-device relationships across three dimensions: spatial attribute-related, device attribute-related, and social attribute-related. Next, a self-supervised learning framework is developed to integrate these multi-dimensional relationships and generate device embeddings enriched with relational semantics. In this framework, the relationship hypergraph is augmented into two distinct views to enhance semantic information, and a parameter-sharing hypergraph neural network learns device embeddings from both views. To further enhance embedding quality, a CCA approach is applied to compare data between the two views. Finally, the trustworthiness of devices is calculated from the learned device embeddings. Extensive experiments demonstrate that the proposed HSLCCA method significantly outperforms the baseline algorithms in identifying trusted devices.
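
To make the pipeline concrete, here is a minimal numpy sketch of the HSLCCA idea: a toy relationship hypergraph, a shared-parameter hypergraph convolution applied to two augmented views, and a CCA-style loss. Everything here (the 6-device incidence matrix, the incidence-dropout augmentation, all dimensions, and the loss weighting `lam`) is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy relationship hypergraph: 6 devices, 4 hyperedges standing in for
# spatial, device, and social attribute relationships. H[i, e] = 1 means
# device i participates in hyperedge e.
H = np.array([
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 0, 1, 1],
], dtype=float)
X = rng.normal(size=(6, 8))                 # initial device features

def hgnn_layer(H, X, W):
    """One hypergraph convolution: average nodes into hyperedges, then
    scatter hyperedge messages back to nodes."""
    Dv = np.maximum(H.sum(axis=1, keepdims=True), 1e-8)
    De = np.maximum(H.sum(axis=0, keepdims=True), 1e-8)
    E = (H / De).T @ X                      # hyperedge representations
    return np.tanh((H / Dv) @ E @ W)        # updated node embeddings

def augment(H, drop=0.25):
    """View generation by randomly dropping incidences."""
    return H * (rng.random(H.shape) > drop)

W = rng.normal(scale=0.3, size=(8, 4))      # parameters shared by both views
Z1 = hgnn_layer(augment(H), X, W)           # embeddings from view 1
Z2 = hgnn_layer(augment(H), X, W)           # embeddings from view 2

def cca_loss(Z1, Z2, lam=1e-3):
    """CCA-style objective: align the two views per device (invariance)
    while decorrelating embedding dimensions (no collapse)."""
    Z1 = (Z1 - Z1.mean(0)) / (Z1.std(0) + 1e-8)
    Z2 = (Z2 - Z2.mean(0)) / (Z2.std(0) + 1e-8)
    n, d = Z1.shape
    invariance = ((Z1 - Z2) ** 2).sum()
    C1, C2 = Z1.T @ Z1 / n, Z2.T @ Z2 / n
    decorrelation = ((C1 - np.eye(d)) ** 2).sum() + ((C2 - np.eye(d)) ** 2).sum()
    return invariance + lam * decorrelation

print("CCA loss:", float(cca_loss(Z1, Z2)))
```

Trust scores could then be read off the learned embeddings, for example via cosine similarity to a known-trusted anchor device.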


Do You Trust the Process?: Modeling Institutional Trust for Community Adoption of Reinforcement Learning Policies

Balepur, Naina, Pei, Xingrui, Sundaram, Hari

arXiv.org Artificial Intelligence

Many governmental bodies are adopting AI policies for decision-making. In particular, Reinforcement Learning (RL) has been used to design policies that citizens would be expected to follow if implemented. Much RL work assumes that citizens follow these policies, and evaluates them with this in mind. However, we know from prior work that without institutional trust, citizens will not follow policies put in place by governments. In this work, we develop a trust-aware RL algorithm for resource allocation in communities. We consider the case of humanitarian engineering, where an organization aims to distribute some technology or resource to community members. We use a Deep Deterministic Policy Gradient approach to learn a resource allocation that fits the needs of the organization. Then, we simulate resource allocation according to the learned policy and model the changes in institutional trust of community members. We investigate how this incorporation of institutional trust affects outcomes, and ask how effectively an organization can learn policies if trust values are private. We find that incorporating trust into RL algorithms can lead to more successful policies, specifically when the organization's goals are less certain. We also find that more conservative trust estimates lead to increased fairness and average community trust, though organization success suffers. Finally, we explore a strategy to prevent unfair outcomes to communities: a quota system implemented by an external entity, which decreases the organization's utility when it does not serve enough community members. We find this intervention can improve fairness and trust among communities in some cases, while decreasing the success of the organization. This work underscores the importance of institutional trust in algorithm design and implementation, and identifies a tension between organization success and community well-being.
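
A hedged simulation sketch of the trust dynamics described above. The greedy `allocate` function is only a stand-in for the learned DDPG policy, and the trust gain/decay constants, the noisy conservative estimate, and the acceptance model are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy community: 10 members with private institutional trust in [0, 1].
trust = rng.uniform(0.3, 0.7, size=10)
budget = 4                                   # resources available per round

def allocate(trust_estimate, budget):
    """Stand-in for the learned (e.g. DDPG) policy: serve the members
    with the highest trust-weighted need."""
    need = rng.uniform(size=trust_estimate.size)
    return np.argsort(need * trust_estimate)[-budget:]

def step(trust, served, gain=0.05, decay=0.02):
    """Trust dynamics: served members gain trust, everyone else decays."""
    new = trust - decay
    new[served] += gain + decay
    return np.clip(new, 0.0, 1.0)

for t in range(50):
    # Conservative estimate: the organization only sees a noisy lower
    # bound, since true trust values are private.
    estimate = np.clip(trust - np.abs(rng.normal(0.1, 0.05, trust.size)), 0, 1)
    served = allocate(estimate, budget)
    # Members accept the allocation with probability equal to their trust.
    accepted = served[rng.random(served.size) < trust[served]]
    trust = step(trust, accepted)

print("final mean community trust:", trust.mean().round(3))
```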


Trusted Routing for Blockchain-Empowered UAV Networks via Multi-Agent Deep Reinforcement Learning

Jia, Ziye, He, Sijie, Zhu, Qiuming, Wang, Wei, Wu, Qihui, Han, Zhu

arXiv.org Artificial Intelligence

Due to their high flexibility and versatility, unmanned aerial vehicles (UAVs) are leveraged in various fields, including surveillance and disaster rescue. However, in UAV networks, routing is vulnerable to malicious damage due to distributed topologies and high dynamics, so ensuring routing security is challenging. In this paper, we characterize the routing process in a time-varying UAV network with malicious nodes. Specifically, we formulate the routing problem to minimize the total delay, which is an integer linear program and intractable to solve. Then, to tackle the network security issue, a blockchain-based trust management mechanism (BTMM) is designed to dynamically evaluate trust values and identify low-trust UAVs. To improve traditional practical Byzantine fault tolerance algorithms in the blockchain, we propose a consensus UAV update mechanism. Besides, considering the local observability, the routing problem is reformulated into a decentralized partially observable Markov decision process. Further, a multi-agent double deep Q-network based routing algorithm is designed to minimize the total delay. Finally, simulations are conducted with attacked UAVs, and numerical results show that the delay of the proposed mechanism decreases by 13.39%, 12.74%, and 16.6% compared with the multi-agent proximal policy optimization algorithm, the multi-agent deep Q-network algorithm, and the method without BTMM, respectively.
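
The sketch below illustrates the two pieces the abstract combines: Beta-reputation-style trust values that let BTMM exclude low-trust UAVs, and delay-minimizing routing over the remaining trusted subgraph. Dijkstra here is only a stand-in for the multi-agent double deep Q-network policy, and the ledger counts, the 0.7 trust threshold, and the random topology are illustrative assumptions.

```python
import heapq
import numpy as np

rng = np.random.default_rng(2)
N = 8                                           # UAVs in the network

# Toy interaction ledger: positive/negative records per UAV. In the paper,
# such records are validated on-chain via the improved PBFT consensus.
pos = rng.integers(5, 20, size=N).astype(float)
neg = rng.integers(0, 6, size=N).astype(float)
trust = (pos + 1.0) / (pos + neg + 2.0)         # Beta-reputation trust value
trusted = trust >= 0.7                          # BTMM filters low-trust UAVs
trusted[0] = trusted[N - 1] = True              # assume endpoints are trusted

# Random time-slot topology with symmetric per-link delays (inf = no link).
delay = rng.uniform(1.0, 10.0, size=(N, N))
delay = (delay + delay.T) / 2
link = rng.random((N, N)) < 0.5
delay[~(link | link.T)] = np.inf
np.fill_diagonal(delay, np.inf)

def min_delay_route(src, dst):
    """Shortest-delay path over the trusted subgraph: a stand-in for the
    routing decisions the multi-agent DDQN policy learns online."""
    dist, prev, heap = {src: 0.0}, {}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:                            # reconstruct the path
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1], d
        for v in range(N):
            if (trusted[v] and np.isfinite(delay[u][v])
                    and d + delay[u][v] < dist.get(v, np.inf)):
                dist[v] = d + delay[u][v]
                prev[v] = u
                heapq.heappush(heap, (dist[v], v))
    return None, np.inf                          # no trusted route exists

path, total = min_delay_route(0, N - 1)
print("trust:", trust.round(2), "\npath:", path, "delay:", round(total, 2))
```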


Rapid and Continuous Trust Evaluation for Effective Task Collaboration Through Siamese Model

Zhu, Botao, Wang, Xianbin

arXiv.org Artificial Intelligence

Trust is emerging as an effective tool to ensure the successful completion of collaborative tasks within collaborative systems. However, rapidly and continuously evaluating the trustworthiness of collaborators during task execution is a significant challenge due to distributed devices, complex operational environments, and dynamically changing resources. To tackle this challenge, this paper proposes a Siamese-enabled rapid and continuous trust evaluation framework (SRCTE) to facilitate effective task collaboration. First, the communication and computing resource attributes of the collaborator in a trusted state, along with historical collaboration data, are collected and represented using an attributed control flow graph (ACFG) that captures trust-related semantic information and serves as a reference for comparison with data collected during task execution. At each time slot of task execution, the collaborator's communication and computing resource attributes, as well as task completion effectiveness, are collected in real time and represented with an ACFG to convey their trust-related semantic information. A Siamese model, consisting of two shared-parameter Structure2vec networks, is then employed to learn the deep semantics of each pair of ACFGs and generate their embeddings. Finally, the similarity between the embeddings of each pair of ACFGs is calculated to determine the collaborator's trust value at each time slot. A real system is built using two Dell EMC 5200 servers and a Google Pixel 8 to test the effectiveness of the proposed SRCTE framework. Experimental results demonstrate that SRCTE converges rapidly with only a small amount of data and achieves a high anomaly trust detection rate compared to the baseline algorithm.
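
A minimal sketch of the Siamese trust evaluation loop: two shared-parameter Structure2vec-style encoders embed a reference ACFG and a runtime ACFG, and their cosine similarity serves as the per-slot trust value. The message-passing form, graph sizes, and attribute-drift model below are simplifying assumptions, not the SRCTE implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

def structure2vec(A, X, W1, W2, rounds=3):
    """Minimal Structure2vec-style message passing: each node embedding
    mixes its own attributes with aggregated neighbor embeddings, then
    the node embeddings are pooled into one graph embedding."""
    mu = np.zeros((A.shape[0], W2.shape[1]))
    for _ in range(rounds):
        mu = np.tanh(X @ W1 + A @ mu @ W2)
    return mu.mean(axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

n, d, h = 6, 4, 8
W1 = rng.normal(scale=0.3, size=(d, h))        # shared Siamese parameters
W2 = rng.normal(scale=0.3, size=(h, h))

# Reference ACFG captured while the collaborator was in a trusted state.
A_ref = (rng.random((n, n)) < 0.4).astype(float)
A_ref = np.maximum(A_ref, A_ref.T)
X_ref = rng.normal(size=(n, d))                # node attributes (resources)

# Runtime ACFG for the current time slot: same structure, drifted attributes.
X_now = X_ref + rng.normal(scale=0.05, size=X_ref.shape)

z_ref = structure2vec(A_ref, X_ref, W1, W2)
z_now = structure2vec(A_ref, X_now, W1, W2)
trust_value = cosine(z_ref, z_now)             # per-slot trust value
print("trust value:", round(trust_value, 4))
```

A large attribute drift (for example, a sudden drop in reported computing resources) would pull the runtime embedding away from the reference and drive the similarity, and hence the trust value, down.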


Enhancing Trust Management System for Connected Autonomous Vehicles Using Machine Learning Methods: A Survey

Xu, Qian, Zhang, Lei, Liu, Yixiao

arXiv.org Artificial Intelligence

Connected Autonomous Vehicles (CAVs) operate in dynamic, open, and multi-domain networks, rendering them vulnerable to various threats. Trust Management Systems (TMS) systematically organize the essential steps of the trust mechanism, identifying malicious nodes to defend against both internal and external threats and ensuring reliable decision-making for cooperative tasks. Recent advances in machine learning (ML) offer significant potential to enhance TMS, especially under the strict requirements of CAVs, such as nodes moving at varying speeds and opportunistic, intermittent network behavior. These features distinguish ML-based TMS for CAVs from those for social networks, static IoT, and Social IoT. This survey proposes a novel three-layer ML-based TMS framework for CAVs in the vehicle-road-cloud integration system, comprising a trust data layer, a trust calculation layer, and a trust incentive layer. A six-dimensional taxonomy of objectives is proposed, and the principles of the ML methods used in each module of each layer are analyzed. Recent studies are then categorized by traffic scenario and mapped against the proposed objectives. Finally, future directions are suggested, addressing open issues and emerging research trends. We maintain an active repository that contains up-to-date literature and open-source projects at https://github.com/octoberzzzzz/ML-based-TMS-CAV-Survey.
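
As a reading aid, the survey's three-layer framework can be pictured as a pipeline. The sketch below uses the layer names from the abstract, while every interface, weight, and threshold is an illustrative assumption.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class TrustDataLayer:
    """Gathers raw trust evidence for a node (direct observations,
    recommendations, context); a real system would parse V2X traffic."""
    def collect(self, node_id: str) -> Dict[str, float]:
        return {"direct": 0.8, "recommended": 0.6, "context": 0.9}

@dataclass
class TrustCalculationLayer:
    """Fuses evidence into one trust score; an ML model would replace
    the fixed weights used here."""
    weights: Dict[str, float] = field(
        default_factory=lambda: {"direct": 0.5, "recommended": 0.3,
                                 "context": 0.2})
    def score(self, evidence: Dict[str, float]) -> float:
        return sum(self.weights[k] * v for k, v in evidence.items())

@dataclass
class TrustIncentiveLayer:
    """Rewards or penalizes nodes based on the computed trust score."""
    threshold: float = 0.5
    def act(self, node_id: str, score: float) -> str:
        return "reward" if score >= self.threshold else "penalize"

data = TrustDataLayer()
calc = TrustCalculationLayer()
incentive = TrustIncentiveLayer()
evidence = data.collect("cav-42")
print(incentive.act("cav-42", calc.score(evidence)))
```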


Opinion Dynamic Under Malicious Agent Influence in Multi-Agent Systems: From the Perspective of Opinion Evolution Cost

Suo, Yuhan, Chai, Runqi, Chai, Senchun, Farhan, Ishrak MD, Zhao, Xudong, Xia, Yuanqing

arXiv.org Artificial Intelligence

In human social systems, debates are often seen as a means to resolve differences of opinion. In reality, however, debates frequently incur significant communication costs, especially when dealing with stubborn opponents. Inspired by this phenomenon, this paper examines the impact of malicious agents on the evolution of normal agents' opinions from the perspective of opinion evolution cost, and proposes corresponding solutions for the scenario in which malicious agents hold different opinions in multi-agent systems (MASs). First, this paper analyzes the negative impact of malicious agents on the opinion evolution process, reveals the additional evolution cost they introduce, and provides a theoretical basis for the subsequent solutions. Second, based on the characteristics of opinion evolution, a malicious agent isolation algorithm based on the opinion evolution direction vector is proposed, which does not impose strong restrictions on the proportion of malicious agents. Additionally, an evolution rate adjustment mechanism is introduced, allowing the system to flexibly regulate the evolution process in complex situations and effectively achieve a trade-off between opinion evolution rate and cost. Extensive numerical simulations demonstrate that the algorithm can effectively eliminate the negative influence of malicious agents and achieve a balance between opinion evolution cost and convergence speed.
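
A toy numpy sketch of the isolation idea: stubborn malicious agents never move, so their opinion evolution direction stays at zero while they sit far from the group, and cutting their influence weights restores cheap convergence. The uniform weights, thresholds, and flagging rule are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10
malicious = np.zeros(n, dtype=bool)
malicious[[7, 9]] = True                   # stubborn agents pinned at +1.0

x = rng.uniform(-1.0, 1.0, size=n)         # initial opinions
x[malicious] = 1.0
W = np.ones((n, n)) / n                    # uniform averaging weights

prev = x.copy()
for t in range(30):
    x = W @ prev                            # DeGroot-style averaging step
    x[malicious] = 1.0                      # malicious agents never move

    # Isolation rule (illustrative): an agent whose opinion direction
    # vector is zero while it sits far from the group is flagged, and
    # its influence weights are removed for everyone else.
    drift = np.abs(x - prev)
    suspect = (drift < 1e-6) & (np.abs(x - np.median(x)) > 0.3)
    W[:, suspect] = 0.0
    W = W / W.sum(axis=1, keepdims=True)    # keep rows stochastic
    prev = x.copy()

print("flagged:", np.where(suspect)[0], "consensus:", x[~malicious].round(3))
```

Without the isolation step, the normal agents would keep paying evolution cost by being dragged toward the stubborn opinion at +1.0 instead of settling on their own consensus.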


Fast Distributed Optimization over Directed Graphs under Malicious Attacks using Trust

Dayı, Arif Kerem, Akgün, Orhan Eren, Gil, Stephanie, Yemini, Michal, Nedić, Angelia

arXiv.org Artificial Intelligence

In this work, we introduce the Resilient Projected Push-Pull (RP3) algorithm designed for distributed optimization in multi-agent cyber-physical systems with directed communication graphs and the presence of malicious agents. Our algorithm leverages stochastic inter-agent trust values and gradient tracking to achieve geometric convergence rates in expectation even in adversarial environments. We introduce growing constraint sets to limit the impact of the malicious agents without compromising the geometric convergence rate of the algorithm. We prove that RP3 converges to the nominal optimal solution almost surely and in the $r$-th mean for any $r\geq 1$, provided the step sizes are sufficiently small and the constraint sets are appropriately chosen. We validate our approach with numerical studies on average consensus and multi-robot target tracking problems, demonstrating that RP3 effectively mitigates the impact of malicious agents and achieves the desired geometric convergence.
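
The sketch below keeps two ingredients of RP3, stochastic trust values that gate the mixing weights and growing constraint sets that cap adversarial iterates, but deliberately simplifies the rest: it runs plain trust-weighted averaging with a diminishing step on an undirected toy problem instead of push-pull gradient tracking over directed graphs. All constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n, d = 6, 2
malicious = np.array([False] * 4 + [True] * 2)

c = rng.normal(size=(n, d))            # local optima: f_i(x) = ||x - c_i||^2
x = rng.normal(size=(n, d))
trust_obs = np.full((n, n), 0.5)       # running trust estimates

def trust_weights(obs):
    """Row-stochastic mixing weights that zero out any neighbor whose
    running trust estimate has fallen below 0.5."""
    W = obs * (obs >= 0.5)
    W = W + np.eye(len(obs))           # always keep self-weight
    return W / W.sum(axis=1, keepdims=True)

for t in range(1, 201):
    # Stochastic trust observations: legitimate columns center on 0.8,
    # malicious columns on 0.2; a running average filters the noise.
    signal = np.where(malicious, 0.2, 0.8)
    trust_obs += (signal + rng.normal(scale=0.1, size=(n, n)) - trust_obs) / t
    W = trust_weights(trust_obs)

    x = W @ x - (1.0 / t) * 2 * (x - c)  # mix with neighbors, then descend
    x[malicious] = 10.0                  # adversaries inject large values

    # Growing constraint sets: project every iterate onto a ball whose
    # radius grows slowly, limiting what malicious values can do early on.
    r = t ** 0.3
    norms = np.linalg.norm(x, axis=1, keepdims=True)
    x = x * np.minimum(1.0, r / np.maximum(norms, 1e-12))

good = ~malicious
target = c[good].mean(axis=0)            # nominal optimum of the sum
print("distance to optimum:", np.linalg.norm(x[good] - target, axis=1).round(3))
```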


Robust Zero Trust Architecture: Joint Blockchain based Federated learning and Anomaly Detection based Framework

Pokhrel, Shiva Raj, Yang, Luxing, Rajasegarar, Sutharshan, Li, Gang

arXiv.org Artificial Intelligence

This paper introduces a robust zero-trust architecture (ZTA) tailored for decentralized systems, empowering efficient remote work and collaboration within IoT networks. Using blockchain-based federated learning principles, our proposed framework includes a robust aggregation mechanism designed to counteract malicious updates from compromised clients, enhancing the security of the global learning process. Moreover, secure and reliable trust computation is essential for remote work and collaboration. The robust ZTA framework integrates anomaly detection and trust computation, ensuring secure and reliable device collaboration in a decentralized fashion. We introduce an adaptive algorithm that dynamically adjusts to varying user contexts, using unsupervised clustering to detect novel anomalies such as zero-day attacks. To ensure reliable and scalable trust computation, we develop an algorithm that employs incremental anomaly detection and clustering techniques to identify and share local and global anomalies between nodes. Future directions include scalability improvements, Dirichlet processes for advanced anomaly detection, privacy-preserving techniques, and the integration of post-quantum cryptographic methods to safeguard against emerging quantum threats.
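
A minimal sketch of the incremental-clustering idea behind the trust computation: running centroids summarize normal behavior, points far from every centroid are treated as novel (potential zero-day) anomalies, and the device's trust score decays on each anomaly. The radius, trust increments, and feature model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

class IncrementalAnomalyTrust:
    """Maintain running cluster centroids of normal behavior; flag points
    far from every centroid as anomalies and decay trust accordingly."""

    def __init__(self, radius=1.5):
        self.centroids, self.counts = [], []
        self.radius = radius
        self.trust = 1.0

    def update(self, x):
        if self.centroids:
            d = [np.linalg.norm(x - c) for c in self.centroids]
            k = int(np.argmin(d))
            if d[k] <= self.radius:
                # Incrementally fold the point into its nearest cluster.
                self.counts[k] += 1
                self.centroids[k] += (x - self.centroids[k]) / self.counts[k]
                self.trust = min(1.0, self.trust + 0.01)
                return False
        # Novel behavior (potential zero-day): new cluster, trust penalty.
        self.centroids.append(x.astype(float).copy())
        self.counts.append(1)
        self.trust = max(0.0, self.trust - 0.1)
        return True

model = IncrementalAnomalyTrust()
normal = rng.normal(0, 0.5, size=(50, 3))       # routine context features
attack = rng.normal(5, 0.5, size=(3, 3))        # anomalous behavior
for x in np.vstack([normal, attack]):
    model.update(x)
print("trust score:", round(model.trust, 2), "clusters:", len(model.centroids))
```

In the decentralized setting the abstract describes, the discovered cluster summaries (not raw data) would be what nodes share to flag global anomalies.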


Trust Driven On-Demand Scheme for Client Deployment in Federated Learning

Chahoud, Mario, Mourad, Azzam, Otrok, Hadi, Bentahar, Jamal, Guizani, Mohsen

arXiv.org Artificial Intelligence

Containerization technology plays a crucial role in Federated Learning (FL) setups, expanding the pool of potential clients and ensuring the availability of specific subsets for each learning iteration. However, doubts arise about the trustworthiness of devices deployed as clients in FL scenarios, especially when container deployment processes are involved. Addressing these challenges is important, particularly in managing potentially malicious clients capable of disrupting the learning process or compromising the entire model. In our research, we are motivated to integrate a trust element into the client selection and model deployment processes within our system architecture, a feature lacking in the initial client selection and deployment mechanism of the On-Demand architecture. We introduce a trust mechanism, named "Trusted-On-Demand-FL", which establishes a relationship of trust between the server and the pool of eligible clients. Utilizing Docker in our deployment strategy enables us to monitor and validate participant actions effectively, ensuring strict adherence to agreed-upon protocols while strengthening defenses against unauthorized data access or tampering. Our simulations rely on a continuous user-behavior dataset and deploy an optimization model powered by a genetic algorithm to efficiently select clients for participation. By assigning trust values to individual clients, dynamically adjusting these values, and penalizing malicious clients through decreased trust scores, our proposed framework identifies and isolates harmful clients. This approach not only reduces disruptions to regular rounds but also minimizes instances of round dismissal, consequently enhancing both system stability and security.
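
A hedged sketch of the selection loop: a small genetic algorithm picks a trusted subset of clients each round, and trust values are raised for well-behaved participants and sharply cut for misbehaving ones, gradually isolating harmful clients. The fitness function, GA hyperparameters, and trust increments are illustrative assumptions, not the paper's optimization model.

```python
import numpy as np

rng = np.random.default_rng(7)
n_clients, k = 20, 5                       # pool size, clients per round
trust = np.full(n_clients, 0.5)            # initial trust values
malicious = rng.random(n_clients) < 0.2    # hidden bad actors

def fitness(mask, trust):
    """Illustrative objective: total trust of the selected clients,
    penalized when the round is over- or under-provisioned."""
    return trust[mask].sum() - 0.5 * abs(mask.sum() - k)

def ga_select(trust, pop=30, gens=40):
    P = rng.random((pop, n_clients)) < (k / n_clients)   # random masks
    for _ in range(gens):
        f = np.array([fitness(m, trust) for m in P])
        parents = P[np.argsort(f)[-pop // 2:]]           # keep the best half
        cut = rng.integers(1, n_clients, size=pop // 2)  # one-point crossover
        kids = np.array([np.concatenate([parents[i][:c],
                                         parents[(i + 1) % len(parents)][c:]])
                         for i, c in enumerate(cut)])
        flip = rng.random(kids.shape) < 0.02             # mutation
        P = np.vstack([parents, kids ^ flip])
    best = P[np.argmax([fitness(m, trust) for m in P])]
    return np.where(best)[0]

for rnd in range(25):
    chosen = ga_select(trust)
    honest = ~malicious[chosen]                          # observed behavior
    trust[chosen[honest]] = np.minimum(1.0, trust[chosen[honest]] + 0.05)
    trust[chosen[~honest]] = np.maximum(0.0, trust[chosen[~honest]] - 0.2)

print("mean trust, honest:", trust[~malicious].mean().round(2),
      "| malicious:", trust[malicious].mean().round(2))
```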


Measuring Perceived Trust in XAI-Assisted Decision-Making by Eliciting a Mental Model

Onari, Mohsen Abbaspour, Grau, Isel, Nobile, Marco S., Zhang, Yingqian

arXiv.org Artificial Intelligence

This empirical study proposes a novel methodology to measure users' perceived trust in an Explainable Artificial Intelligence (XAI) model. To do so, users' mental models are elicited using Fuzzy Cognitive Maps (FCMs). First, we exploit an interpretable Machine Learning (ML) model to classify suspected COVID-19 patients into positive or negative cases. Medical Experts (MEs) then conduct a diagnostic decision-making task based on their own knowledge and on the predictions and interpretations provided by the XAI model. To evaluate the impact of interpretations on perceived trust, MEs rate explanation satisfaction attributes through a survey. These attributes are then treated as FCM concepts in order to determine their influences on each other and, ultimately, on perceived trust. Moreover, to account for MEs' mental subjectivity, fuzzy linguistic variables are used to determine the strength of the influences. After the FCMs reach their steady state, a quantified value is obtained that measures the perceived trust of each ME. The results show that the quantified values can determine whether MEs trust or distrust the XAI model. We analyze this behavior by comparing the quantified values with MEs' performance in completing diagnostic tasks.
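
A compact sketch of the FCM mechanics the study relies on: concept activations are iterated through a sigmoid-squashed weighted update until a steady state, and the activation of the trust concept is read off as the quantified perceived-trust value. The concept set, weight matrix, and initial ratings below are toy assumptions; in the study they come from each ME's survey responses and fuzzy linguistic ratings.

```python
import numpy as np

# Concepts: explanation-satisfaction attributes plus a perceived-trust
# output concept. W[i, j] is the influence of concept i on concept j.
concepts = ["understandability", "sufficiency", "usefulness", "trust"]
W = np.array([
    [0.0, 0.3, 0.4, 0.5],     # understandability influences the others
    [0.0, 0.0, 0.3, 0.4],     # sufficiency
    [0.0, 0.0, 0.0, 0.6],     # usefulness feeds perceived trust
    [0.0, 0.0, 0.0, 0.0],     # trust is a sink concept
])

def sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + np.exp(-lam * x))

def fcm_steady_state(a0, W, tol=1e-6, max_iter=500):
    """Iterate a(t+1) = sigmoid(a(t) + a(t) @ W) until convergence."""
    a = a0.copy()
    for _ in range(max_iter):
        a_next = sigmoid(a + a @ W)
        if np.max(np.abs(a_next - a)) < tol:
            return a_next
        a = a_next
    return a

a0 = np.array([0.7, 0.6, 0.8, 0.0])   # one expert's survey ratings (toy)
state = fcm_steady_state(a0, W)
print("perceived trust:", round(float(state[concepts.index("trust")]), 3))
```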